Imposition of a lasso penalty shrinks parameter estimates toward zero and performs continuous model selection. Lasso penalized regression is capable of handling linear regression problems where the number of predictors far exceeds the number of cases. This paper tests two exceptionally fast algorithms for estimating regression coefficients with a lasso penalty. The previously known $\ell_2$ algorithm is based on cyclic coordinate descent. Our new $\ell_1$ algorithm is based on greedy coordinate descent and Edgeworth's algorithm for ordinary $\ell_1$ regression. Each algorithm relies on a tuning constant that can be chosen by cross-validation. In some regression problems it is natural to group parameters and penalize parameters group by group rather than separately. If the group penalty is proportional to the Euclidean norm of the parameters of the group, then it is possible to majorize the norm and reduce parameter estimation to $\ell_2$ regression with a lasso penalty. Thus, the existing algorithm can be extended to novel settings. Each of the algorithms discussed is tested via either simulated or real data or both. The Appendix proves that a greedy form of the $\ell_2$ algorithm converges to the minimum value of the objective function.
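To illustrate the cyclic coordinate descent idea for the $\ell_2$ (least-squares) lasso mentioned above, here is a minimal sketch in Python/NumPy. It is not the paper's implementation; the function name `lasso_cd`, the fixed iteration count, and the objective scaling $\tfrac{1}{2}\|y - X\beta\|_2^2 + \lambda\|\beta\|_1$ are all illustrative assumptions. Each coordinate update is the well-known soft-thresholding step.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=100):
    """Cyclic coordinate descent for (1/2)||y - X b||^2 + lam * ||b||_1.

    Illustrative sketch only: fixed sweep count, no convergence test,
    no standardization of predictors.
    """
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)      # precomputed x_j' x_j
    r = y - X @ b                      # current residual
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * b[j]        # form the partial residual excluding coordinate j
            rho = X[:, j] @ r          # x_j' (partial residual)
            # soft-thresholding: the exact minimizer in coordinate j
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r -= X[:, j] * b[j]        # restore the full residual
    return b
```

For a sufficiently large $\lambda$ every soft-threshold clamps to zero, so the fit returns the all-zero coefficient vector, which is the shrinkage and continuous model selection behavior described in the abstract.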